58 research outputs found

    Integration of prostate cancer clinical data using an ontology

    It is increasingly important for investigators to efficiently and effectively access, interpret, and analyze data from diverse biological, literature, and annotation sources in a unified way. The heterogeneity of biomedical data and the lack of metadata are the primary sources of difficulty for integration, presenting major challenges to effective search and retrieval of information. As a proof of concept, the Prostate Cancer Ontology (PCO) was created for the development of the Prostate Cancer Information System (PCIS). PCIS is applied to demonstrate how the ontology is utilized to solve the semantic heterogeneity problem in the integration of two prostate cancer related database systems at the Fox Chase Cancer Center. As a result of the integration process, the semantic query language SPARQL is applied to perform integrated queries across the two database systems based on the PCO.
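    An ontology-mediated query of the kind the abstract describes might be sketched in SPARQL as follows; the namespace, class, and property names here are hypothetical illustrations, not drawn from the actual PCO:

    ```sparql
    # Hypothetical PCO namespace; the real ontology's URIs will differ.
    PREFIX pco: <http://example.org/pco#>

    # Retrieve patients (integrated from both source databases) whose
    # PSA level exceeds a clinical threshold, together with Gleason score.
    SELECT ?patient ?psaLevel ?gleasonScore
    WHERE {
      ?patient a pco:ProstateCancerPatient ;
               pco:hasPSALevel ?psaLevel ;
               pco:hasGleasonScore ?gleasonScore .
      FILTER (?psaLevel > 4.0)
    }
    ```

    Because both source databases are mapped to the same ontology classes and properties, a single query like this can span records that originated in either system.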

    Ethical, legal, and social implications of learning health systems


    Vitamin D deficiency is associated with IL-6 levels and monocyte activation in HIV-infected persons

    Immune activation plays a key role in HIV pathogenesis. Markers of inflammation have been associated with vitamin D deficiency in the general population. Studies have also demonstrated associations of vitamin D deficiency with increased risk of HIV progression and death. The relationship between persistent inflammation and immune activation during chronic HIV infection and vitamin D deficiency remains unclear.

    Cryopreserved specimens were analyzed from 663 participants at the time of enrollment in the Study to Understand the Natural History of HIV/AIDS in the Era of Effective Therapy (SUN Study) from 2004 to 2006. Biomarkers of inflammation, atherosclerosis, and coagulation were measured using enzyme-linked immunosorbent assays (ELISAs) and electrochemiluminescence. 25(OH)D, the stable precursor form of vitamin D, was measured using a radioimmunoassay, with levels defined as normal (≄30 ng/mL), insufficient (20-29 ng/mL), and deficient (<20 ng/mL). Monocyte phenotypes were assessed by flow cytometry. Linear and logistic regression models were used to determine statistical associations between biomarkers and vitamin D deficiency.

    25(OH)D levels were deficient in 251 (38%) participants, insufficient in 222 (34%), and normal in 190 (29%). Patients with vitamin D deficiency, compared to those with insufficient or normal vitamin D levels, had increased levels of IL-6 (23%; p<0.01), TNF-α (21%; p=0.03), and D-dimer (24%; p=0.01), higher proportions of CD14dimCD16+ (22%; p<0.01) and CX3CR1+ monocytes (48%; p<0.001), and a decreased frequency of CCR2+ monocytes (-3.4%; p<0.001). In fully adjusted models, vitamin D associations with abnormal biomarker levels persisted for IL-6 levels and the CX3CR1+ and CCR2+ phenotypes.

    Vitamin D deficiency is associated with greater inflammation and activated monocyte phenotypes. The role of vitamin D deficiency in persistent immune activation and associated complications during chronic HIV disease should be further evaluated as a possible target for intervention.
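    The three-level classification used in the study follows directly from the stated cutoffs; a minimal sketch (the function name is our own, not from the paper):

    ```python
    def classify_25ohd(level_ng_ml: float) -> str:
        """Classify a serum 25(OH)D measurement (ng/mL) using the cutoffs
        reported in the abstract: normal >= 30, insufficient 20-29,
        deficient < 20."""
        if level_ng_ml >= 30:
            return "normal"
        if level_ng_ml >= 20:
            return "insufficient"
        return "deficient"
    ```

    Because the boundaries are checked in descending order, each measurement falls into exactly one category.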

    FuGEFlow: data model and markup language for flow cytometry

    Background: Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts being taken to eliminate this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer-reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high-throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata.

    Methods: We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model to a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of the Flow-ML was generated and validated against an example MIFlowCyt-compliant experiment description.

    Results: The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets. Additional project documentation, including reusable design patterns and a guide for setting up a development environment, was contributed back to the FuGE project.

    Conclusion: We have shown that an extension of FuGE can be used to transform minimum information requirements in natural language to markup language in XML. Extending FuGE required significant effort, but in our experience the benefits outweighed the costs. The FuGEFlow is expected to play a central role in describing flow cytometry experiments and ultimately facilitating data exchange, including with public flow cytometry repositories currently under development.
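    To illustrate the general idea of serializing an experiment description to XML, here is a minimal Python sketch using the standard library; every element and attribute name below is a hypothetical illustration, not taken from the official Flow-ML/FuGEFlow schema:

    ```python
    import xml.etree.ElementTree as ET

    # Build a toy experiment description in memory. In a real Flow-ML
    # document the tags would come from the generated FuGEFlow schema.
    exp = ET.Element("FlowCytometryExperiment", id="exp-001")

    purpose = ET.SubElement(exp, "ExperimentPurpose")
    purpose.text = "Monocyte phenotyping"

    # MIFlowCyt requires instrument details; modeled here as one element.
    ET.SubElement(exp, "Instrument", manufacturer="ExampleCorp")

    # Serialize to an XML string suitable for exchange or validation.
    xml_text = ET.tostring(exp, encoding="unicode")
    ```

    A document produced this way could then be validated against the generated XSD, mirroring the validation step the authors describe.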

    Architecture and usability of OntoKeeper, an ontology evaluation tool

    Background: The existing community-wide bodies of biomedical ontologies are known to contain quality and content problems. Past research has revealed various errors related to their semantics and logical structure. Automated tools may help to ease the ontology construction, maintenance, assessment, and quality assurance processes. However, relatively few tools exist that can provide this support to knowledge engineers.

    Method: We introduce OntoKeeper, a web-based tool that can automate quality scoring for ontology developers. We enlisted 5 experienced ontologists to test the tool and then administered the System Usability Scale to measure their assessment.

    Results: In this paper, we present usability results from the 5 ontologists revealing high system usability of OntoKeeper, and use cases that demonstrate its capabilities in previously published biomedical ontology research.

    Conclusion: To the best of our knowledge, OntoKeeper is one of the first ontology evaluation tools to provide this functionality to knowledge engineers with good usability.
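    The System Usability Scale mentioned above has a standard scoring rule: ten Likert items (1-5), odd-numbered items scored as response minus 1, even-numbered items as 5 minus response, with the sum scaled by 2.5 to a 0-100 range. A minimal sketch (the abstract does not include the authors' scoring code):

    ```python
    def sus_score(responses: list[int]) -> float:
        """Compute the System Usability Scale score (0-100) from ten
        Likert-item responses, each an integer from 1 to 5.

        Odd-numbered items are positively worded (score = response - 1);
        even-numbered items are negatively worded (score = 5 - response).
        The summed item scores are multiplied by 2.5."""
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS needs ten responses, each between 1 and 5")
        total = sum(r - 1 if i % 2 == 0 else 5 - r
                    for i, r in enumerate(responses))
        return total * 2.5
    ```

    A score around 68 is conventionally treated as average usability, which is why SUS results are usually reported against that benchmark.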

    Security and privacy requirements for a multi-institutional cancer research data grid: an interview-based study

    Background: Data protection is important for all information systems that deal with human-subjects data. Grid-based systems – such as the cancer Biomedical Informatics Grid (caBIG) – seek to develop new mechanisms to facilitate real-time federation of cancer-relevant data sources, including sources protected under a variety of regulatory laws, such as HIPAA and 21CFR11. These systems embody new models for data sharing, and hence pose new challenges to the regulatory community and to those who would develop or adopt them. These challenges must be understood by both systems developers and system adopters. In this paper, we describe our work collecting policy statements, expectations, and requirements from regulatory decision makers at academic cancer centers in the United States. We use these statements to examine fundamental assumptions regarding data sharing using data federations and grid computing.

    Methods: An interview-based study of key stakeholders from a sample of US cancer centers. Interviews were structured and used an instrument that was developed for the purpose of this study. The instrument included a set of problem scenarios – difficult policy situations that were derived during a full-day discussion of potentially problematic issues by a set of project participants with diverse expertise. Each problem scenario included a set of open-ended questions designed to elucidate stakeholder opinions and concerns. Interviews were transcribed verbatim and used for both qualitative and quantitative analysis. For quantitative analysis, data were aggregated at the individual or institutional unit of analysis, depending on the specific interview question.

    Results: Thirty-one (31) individuals at six cancer centers were contacted to participate. Twenty-four of the thirty-one (24/31) individuals responded to our request, yielding a total response rate of 77%. Respondents included IRB directors and policy-makers, privacy and security officers, directors of offices of research, information security officers, and university legal counsel. Nineteen total interviews were conducted over a period of 16 weeks. Respondents provided answers for all four scenarios (a total of 87 questions). Results were grouped by broad themes, including among others: governance, legal and financial issues, partnership agreements, de-identification, institutional technical infrastructure for security and privacy protection, training, risk management, auditing, IRB issues, and patient/subject consent.

    Conclusion: The findings suggest that with additional work, large-scale federated sharing of data within a regulated environment is possible. A key challenge is developing suitable models for authentication and authorization practices within a federated environment. Authentication – the recognition and validation of a person's identity – is in fact a global property of such systems, while authorization – the permission to access data or resources – mimics data sharing agreements in being best served at a local level. Nine specific recommendations result from the work and are discussed in detail. These include: (1) the necessity to construct separate legal or corporate entities for governance of federated sharing initiatives on this scale; (2) consensus on the treatment of foreign and commercial partnerships; (3) the development of risk models and risk management processes; (4) development of technical infrastructure to support the credentialing process associated with research including human subjects; (5) exploring the feasibility of developing large-scale, federated honest broker approaches; (6) the development of suitable, federated identity provisioning processes to support federated authentication and authorization; (7) community development of requisite HIPAA and research ethics training modules by federation members; (8) the recognition of the need for central auditing requirements and authority; and (9) use of two-protocol data exchange models where possible in the federation.
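    The global-versus-local split the authors draw between authentication and authorization can be sketched in a few lines; all names and policy entries below are hypothetical illustrations of the pattern, not anything from caBIG:

    ```python
    # Authentication is federation-wide: any trusted member institution
    # can vouch for an identity.
    TRUSTED_IDENTITY_PROVIDERS = {"center-a.example.org", "center-b.example.org"}

    # Authorization is local: the data-holding institution keeps its own
    # access-policy table, mirroring its data sharing agreements.
    LOCAL_ACCESS_POLICY = {
        ("alice@center-a.example.org", "tumor-registry"): True,
    }

    def authenticate(user_id: str, issuer: str) -> bool:
        """Global check: was this identity issued by a federation member?"""
        return issuer in TRUSTED_IDENTITY_PROVIDERS

    def authorize(user_id: str, resource: str) -> bool:
        """Local check: does this institution's own policy grant access?"""
        return LOCAL_ACCESS_POLICY.get((user_id, resource), False)
    ```

    A request would pass both checks in sequence: federation-level identity validation first, then the resource owner's local policy decision.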

    Coordinated Evolution of Ontologies of Informed Consent

    Informed consent, whether for health or behavioral research or clinical treatment, rests on notions of voluntarism, information disclosure and understanding, and the decision-making capacity of the person providing consent. Whether consent is for research or treatment, informed consent serves as a safeguard for trust that permissions given by the research participant or patient are upheld across the informed consent (IC) lifecycle. The IC lifecycle involves not only documentation of the consent when originally obtained, but also actions that require clear communication of permissions from the initial acquisition of data and specimens through handoffs to, for example, secondary researchers, allowing them access to data or biospecimens referenced in the terms of the original consent.
